The Kemeny constant of a Markov chain
Authors
Abstract
Given an ergodic finite-state Markov chain, let M_iw denote the mean time from i to equilibrium, meaning the expected time, starting from i, to arrive at a state selected randomly according to the equilibrium measure w of the chain. John Kemeny observed that M_iw does not depend on the starting point i. The common value K = M_iw is the Kemeny constant or seek time of the chain. K is a spectral invariant, to wit, the trace of the resolvent matrix. We review basic facts about the seek time, and connect it to the bus paradox and the Central Limit Theorem for ergodic Markov chains.

For J. Laurie Snell

The seek time

We begin by reviewing basic facts and establishing notation for Markov chains. For background, see Kemeny and Snell [4] or Grinstead and Snell [3], bearing in mind that the notation here is somewhat different. Let P be the transition matrix of an ergodic finite-state Markov chain. We write the entries of P using tensor notation, with P_i^j being the probability that from state i we move to state j. (There is some possibility ...

∗ Copyright (C) 2009 Peter G. Doyle. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, as published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
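A minimal numerical sketch of Kemeny's observation (not from the paper; the three-state chain P below and the convention that the hitting time of the starting state itself is zero are assumptions made for illustration): it computes the mean hitting times, averages them against the equilibrium measure w, and checks that the result is the same for every starting state.

import numpy as np

# An arbitrary ergodic three-state chain (illustrative only).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
n = P.shape[0]

# Equilibrium measure w: left eigenvector of P for eigenvalue 1, normalized.
evals, evecs = np.linalg.eig(P.T)
w = np.real(evecs[:, np.argmax(np.real(evals))])
w = w / w.sum()

# Mean hitting times M[i, j]: for each target j, solve the linear system
#   M[i, j] = 1 + sum_k P[i, k] * M[k, j]  for i != j,  with M[j, j] = 0.
M = np.zeros((n, n))
for j in range(n):
    rest = [i for i in range(n) if i != j]
    A = np.eye(n - 1) - P[np.ix_(rest, rest)]
    M[rest, j] = np.linalg.solve(A, np.ones(n - 1))

# Seek time from each start i: expected hitting time of a w-random target.
K_by_start = M @ w
print(K_by_start)   # all entries agree; the common value is K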
Similar articles
On the Kemeny constant and stationary distribution vector for a Markov chain
Suppose that A is an irreducible stochastic matrix of order n, and denote its eigenvalues by 1, λ_2, ..., λ_n. The Kemeny constant K(A) for the Markov chain associated with A is defined as K(A) = Σ_{j=2}^{n} 1/(1 − λ_j), and can be interpreted as the mean first passage time from an unknown initial state to an unknown destination state in the Markov chain. Let w denote the stationary distribution vector for...
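A small follow-up sketch (same illustrative chain and assumptions as in the snippet above) checking the spectral formula quoted here, K(A) = Σ_{j=2}^{n} 1/(1 − λ_j), against the hitting-time computation:

import numpy as np

A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])   # same ergodic chain as above

lam = np.linalg.eigvals(A)
lam = lam[np.argsort(np.abs(lam - 1.0))]      # the eigenvalue 1 comes first
K_spectral = np.sum(1.0 / (1.0 - lam[1:]))    # sum over lambda_2, ..., lambda_n
print(np.real_if_close(K_spectral))           # matches K_by_start above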
The Kemeny Constant for Finite Homogeneous Ergodic Markov Chains
A quantity known as the Kemeny constant, which is used to measure the expected number of links that a surfer on the World Wide Web, located on a random web page, needs to follow before reaching his/her desired location, coincides with the better-known notion of the expected time to mixing, i.e., to reaching stationarity of an ergodic...
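A Monte Carlo sketch of this reading (again using the illustrative chain and conventions from the snippets above; the trial count is arbitrary): from any starting page, the simulated average number of steps to reach a destination drawn from the stationary distribution is approximately the same value K.

import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
n = P.shape[0]

# Stationary distribution, computed as before.
evals, evecs = np.linalg.eig(P.T)
w = np.real(evecs[:, np.argmax(np.real(evals))])
w = w / w.sum()

def mean_steps_to_random_target(start, trials=20000):
    """Average number of steps from `start` to a w-random target state."""
    total = 0
    for _ in range(trials):
        target = rng.choice(n, p=w)
        state, steps = start, 0
        while state != target:
            state = rng.choice(n, p=P[state])
            steps += 1
        total += steps
    return total / trials

print([round(mean_steps_to_random_target(i), 2) for i in range(n)])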
Scaling laws for consensus protocols subject to noise
We study the performance of discrete-time consensus protocols in the presence of additive noise. When the consensus dynamic corresponds to a reversible Markov chain, we give an exact expression for a weighted version of steady-state disagreement in terms of the stationary distribution and hitting times in an underlying graph. We then show how this result can be used to characterize the noise ro...
Relative Entropy Rate between a Markov Chain and Its Corresponding Hidden Markov Chain
In this paper we study the relative entropy rate between a homogeneous Markov chain and a hidden Markov chain defined by observing the output of a discrete stochastic channel whose input is a finite-state-space homogeneous stationary Markov chain. For this purpose, we obtain the relative entropy between two finite subsequences of the above-mentioned chains with the help of the definition of...